Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not yet be available without a charge during the embargo (administrative interval).
Some links on this page may take you to non-federal websites. Their policies may differ from this site's.
- Both deaf and hearing readers use morphological awareness skills to decode and comprehend printed English. Deaf readers, for whom phonological awareness is a relative weakness while orthographic sensitivity is a strength, may have a different relationship with morphology than similarly skilled hearing readers. This study investigated the impact of various reading sub-skills—spelling, vocabulary size, morphological awareness, and phonological awareness—on reading comprehension for deaf and hearing adult readers. Morphological awareness had a stronger relationship with reading comprehension for deaf than hearing readers, particularly for deaf readers with advanced morphological skills. Morphology and vocabulary were also more strongly related for the deaf group, indicating that deaf readers leverage morphology to expand their word knowledge. Overall, the findings highlight the unique and significant role of morphological awareness in the skilled deaf reader's "toolbox" and underscore the importance of morphological instruction in supporting the reading development of deaf individuals.
- The view that words are arbitrary is a foundational assumption about language, used to set human languages apart from nonhuman communication. We present here a study of the alignment between the semantic and phonological structure (systematicity) of American Sign Language (ASL), and for comparison, two spoken languages—English and Spanish. Across all three languages, words that are semantically related are more likely to be phonologically related, highlighting systematic alignment between word form and word meaning. Critically, there is a significant effect of iconicity (a perceived physical resemblance between word form and word meaning) on this alignment: words are most likely to be phonologically related when they are semantically related and iconic. This phenomenon is particularly widespread in ASL: half of the signs in the ASL lexicon are iconically related to other signs, i.e., there is a nonarbitrary relationship between form and meaning that is shared across signs. Taken together, the results reveal that iconicity can act as a driving force behind the alignment between the semantic and phonological structure of spoken and signed languages, but languages may differ in the extent to which iconicity structures the lexicon. Theories of language must account for iconicity as a possible organizing principle of the lexicon. Free, publicly-accessible full text available April 22, 2026.
- Free, publicly-accessible full text available February 10, 2026.
- Theories of reading posit that decisions about "where" and "when" to move the eyes are driven by visual and linguistic factors, extracted from the perceptual span and word identification span, respectively. We tested this hypothesized dissociation by masking, outside of a visible window, either the spaces between the words (to assess the perceptual span, Experiment 1) or the letters within the words (to assess the word identification span, Experiment 2). We also investigated whether deaf readers' previously reported larger reading span was specifically linked to one of these spans. We analyzed reading rate to test overall reading efficiency, as well as average saccade length to test "where" decisions and average fixation duration to test "when" decisions. Both hearing and deaf readers' perceptual spans extended between 10 and 14 characters, and their word identification spans extended to eight characters to the right of fixation. Despite similarly sized rightward spans, deaf readers read more efficiently overall and showed a larger increase in reading rate when leftward text was available, suggesting they attend more to leftward information. Neither rightward span was specifically related to where or when decisions for either group. Our results challenge the assumed dissociation between type of reading span and type of saccade decision and indicate that reading efficiency requires access to both perceptual and linguistic information in the parafovea.
- Little is known about how information to the left of fixation impacts reading and how it may help to integrate what has been read into the context of the sentence. To better understand the role of this leftward information and how it may be beneficial during reading, we compared the sizes of the leftward span for reading-matched deaf signers (n = 32) and hearing adults (n = 40) using a gaze-contingent moving window paradigm with windows of 1, 4, 7, 10, and 13 characters to the left, as well as a no-window condition. All deaf participants were prelingually and profoundly deaf, used American Sign Language (ASL) as a primary means of communication, and were exposed to ASL before age eight. Analysis of reading rates indicated that deaf readers had a leftward span of 10 characters, compared to four characters for hearing readers, and the size of the span was positively related to reading comprehension ability for deaf but not hearing readers. These findings suggest that deaf readers may engage in continued word processing of information obtained to the left of fixation, making reading more efficient, and showing a qualitatively different reading process than hearing readers.
- Corina, David P. (Ed.). Letter recognition plays an important role in reading and follows different phases of processing, from early visual feature detection to the access of abstract letter representations. Deaf ASL–English bilinguals experience orthography in two forms: English letters and fingerspelling. However, the neurobiological nature of fingerspelling representations, and the relationship between the two orthographies, remains unexplored. We examined the temporal dynamics of single English letter and ASL fingerspelling font processing in an unmasked priming paradigm with centrally presented targets for 200 ms preceded by 100 ms primes. Event-related brain potentials were recorded while participants performed a probe detection task. Experiment 1 examined English letter-to-letter priming in deaf signers and hearing non-signers. We found that English letter recognition is similar for deaf and hearing readers, extending previous findings with hearing readers to unmasked presentations. Experiment 2 examined priming effects between English letters and ASL fingerspelling fonts in deaf signers only. We found that fingerspelling fonts primed both fingerspelling fonts and English letters, but English letters did not prime fingerspelling fonts, indicating a priming asymmetry between letters and fingerspelling fonts. We also found an N400-like priming effect when the primes were fingerspelling fonts, which might reflect strategic access to the lexical names of letters. The studies suggest that deaf ASL–English bilinguals process English letters and ASL fingerspelling differently and that the two systems may have distinct neural representations. However, the fact that fingerspelling fonts can prime English letters suggests that the two orthographies may share abstract representations to some extent.
- The lexical quality hypothesis proposes that the quality of phonological, orthographic, and semantic representations impacts reading comprehension. In Study 1, we evaluated the contributions of lexical quality to reading comprehension in 97 deaf and 98 hearing adults matched for reading ability. While phonological awareness was a strong predictor for hearing readers, for deaf readers, orthographic precision and semantic knowledge, not phonology, predicted reading comprehension (assessed by two different tests). For deaf readers, the architecture of the reading system adapts by shifting reliance from (coarse-grained) phonological representations to high-quality orthographic and semantic representations. In Study 2, we examined the contribution of American Sign Language (ASL) variables to reading comprehension in 83 deaf adults. Fingerspelling (FS) and ASL comprehension skills predicted reading comprehension. We suggest that FS might reinforce orthographic-to-semantic mappings and that sign language comprehension may serve as a linguistic basis for the development of skilled reading in deaf signers.
- Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Lexical frequency, iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, or clinicians for a variety of research, instructional, or assessment purposes.
An official website of the United States government